Red Hat Insights Core

Insights Core is a framework for collecting and processing data from systems in a standard way. The general idea is to collect data from files or commands, convert the output of each to a Python object with an appropriate interface, optionally compose several of those objects into others that unify their APIs or provide friendlier ones, and finally to write policies that operate on any of those components.

You don't have to use all of the features Insights Core provides. You can use it as a simple file and command collector. But the real power comes from the catalogues of built-in components that go beyond simple data collection to expose proper objects to your own extensions of the system.

To accomplish this, Insights Core uses an internal dependency resolution engine. Components in the form of class or function definitions describe dependencies on other components with decorators, and the resulting dependency graphs can be executed once all components you care about have been loaded.

This is an introduction to the dependency resolution system followed by a tour of the standard components Insights Core provides.

Components

To make a component, we first have to create a component type, which is a decorator used to define components.


In [1]:
from insights.core import dr

In [2]:
# Here's our component type with the clever name "component."

# We could have named it anything. Insights Core provides several types
# that we'll come to later.
component = dr.new_component_type("component")

Do I have to give it a name?

You don't have to, but because the return value of dr.new_component_type is a function that's created on demand, every component type will have the same name and belong to an internal module unless you give it one.


In [3]:
blah = dr.new_component_type()
stuff = dr.new_component_type()

print dr.get_name(blah)
print dr.get_name(stuff)
print dr.get_name(component)


insights.core.dr.decorator
insights.core.dr.decorator
__main__.component

How do I use it?


In [4]:
import random

# Make two components with no dependencies
@component()
def rand():
    return random.random()

@component()
def three():
    return 3

# Make a component that depends on the other two. Notice that we depend on two
# things, and there are two arguments to the function.
@component(rand, three)
def mul_things(x, y):
    return x * y

In [5]:
# Now that we have a few components defined, let's run them.

from pprint import pprint

# If you call run with no arguments, all components of every type (with a few caveats
# I'll address later) are run, and their values or exceptions are collected in an
# object called a broker.
broker = dr.run()
pprint(broker.instances)


{<function three at 0x7f7c4f690140>: 3,
 <function rand at 0x7f7c4f690488>: 0.8383122474151941,
 <function mul_things at 0x7f7c4f690500>: 2.514936742245582}

Component Types

We can define components of different types by creating different decorators.


In [6]:
stage = dr.new_component_type("stage")

In [7]:
@stage(mul_things)
def spam(m):
    return int(m)

In [8]:
broker = dr.run()
print "All Instances"
pprint(broker.instances)
print
print "Components"
pprint(broker.get_by_type(component))

print
print "Stages"
pprint(broker.get_by_type(stage))


All Instances
{<function three at 0x7f7c4f690140>: 3,
 <function rand at 0x7f7c4f690488>: 0.4201753614228262,
 <function mul_things at 0x7f7c4f690500>: 1.2605260842684785,
 <function spam at 0x7f7c4f690a28>: 1}

Components
{<function three at 0x7f7c4f690140>: 3,
 <function rand at 0x7f7c4f690488>: 0.4201753614228262,
 <function mul_things at 0x7f7c4f690500>: 1.2605260842684785}

Stages
{<function spam at 0x7f7c4f690a28>: 1}

Executors

What happens if you specify executor=dr.broker_executor in the type definition? Your component will get the broker object that's carrying the state of the evaluation up to the point the component is called.


In [9]:
thing = dr.new_component_type("thing", executor=dr.broker_executor)

@thing(rand, three)
def stuff(broker):
    r = broker[rand]
    t = broker[three]
    return r + t

In [10]:
broker = dr.run()
print broker[stuff]


3.51968636979

Notice that the broker can be used like a dictionary to get the value of any component that has already executed, without going through the broker.instances attribute directly.

Exception Handling

When a component raises an exception, the exception is recorded in a dictionary whose key is the component and whose value is a list of exceptions. The traceback related to each exception is recorded in a dictionary of exceptions to tracebacks. We record exceptions in a list because some components may generate more than one value. We'll come to that later.


In [11]:
@stage()
def boom():
    raise Exception("Boom!")

broker = dr.run()
e = broker.exceptions[boom][0]
t = broker.tracebacks[e]
pprint(e)
print
print t


Exception('Boom!',)

Traceback (most recent call last):
  File "insights/core/dr.py", line 633, in run
    result = DELEGATES[component](broker)
  File "insights/core/dr.py", line 592, in __f
    return executor(func, broker, requires, optional)
  File "insights/core/dr.py", line 534, in default_executor
    return func(*args)
  File "<ipython-input-11-e4e05f37145d>", line 3, in boom
    raise Exception("Boom!")
Exception: Boom!

Missing Dependencies

A component with any missing required dependencies will not be called. Missing dependencies are recorded in the broker in a dictionary whose keys are components and whose values are two-element tuples. The first element is a list of all missing standalone requirements. The second is a list of the "any" lists, those in which at least one member is required, from which no member was available.


In [12]:
@stage("where's my stuff at?")
def missing_stuff(s):
    return s

broker = dr.run()
print broker.missing_requirements[missing_stuff]


(["where's my stuff at?"], [])

In [13]:
@stage("a", "b", [rand, "d"], ["e", "f"])
def missing_more_stuff(a, b, c, d, e, f):
    return a + b + c + d + e + f

broker = dr.run()
print broker.missing_requirements[missing_more_stuff]


(['a', 'b'], [['e', 'f']])

Notice that the first elements in the dependency list after @stage are simply "a" and "b", but the next two elements are themselves lists. This means that at least one element of each list must be present. The first "any" list has [rand, "d"], and rand is available, so it resolves. However, neither "e" nor "f" are available, so the resolution fails. Our missing dependencies list includes the first two standalone elements as well as the second "any" list.

SkipComponent

Components that raise dr.SkipComponent won't have any values or exceptions recorded and will be treated as missing dependencies for components that depend on them.
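
Here's a minimal sketch of that behavior; the no_apples and apple_pie components are purely illustrative:

@stage()
def no_apples():
    raise dr.SkipComponent("no apples on this system")

@stage(no_apples)
def apple_pie(a):
    return a

broker = dr.run()
print no_apples in broker.instances             # False: no value or exception recorded
print apple_pie in broker.missing_requirements  # True: treated as a missing dependency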

Optional Dependencies

There's an "optional" keyword that takes a list of components that should be run before the current one. If they throw exceptions or don't run for some other reason, execute the current component anyway and just say they were None.


In [14]:
@stage(rand, optional=['test'])
def less_than_five(r, t):
    return (int(r * 10.0) < 5.0, t)

broker = dr.run()
print broker[less_than_five]


(True, None)

auto_requires and auto_optional

The definition of a component type may take two other keywords: auto_requires and auto_optional. Their specifications are the same as the requires and optional portions of the component decorators. Any component decorated with a component type that has auto_requires or auto_optional will automatically depend on the specified components, and any additional dependencies on the component itself will just be appended.


In [15]:
mything = dr.new_component_type("mything", auto_requires=[rand])

@mything()
def dothings(r):
    return 4 * r

broker = dr.run(broker=broker)

pprint(broker[dothings])
pprint(dr.get_dependencies(dothings))


1.4529321430782263
set([<function rand at 0x7f7c4f690488>])
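
auto_optional isn't shown above, but it works the same way. Here's a minimal sketch under that assumption; the otherthing type and quadruple component are purely illustrative:

otherthing = dr.new_component_type("otherthing", auto_optional=[three])

@otherthing()
def quadruple(t):
    # t is the value of three, or None if three didn't run
    return 4 * (t or 1)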

Metadata

Component types and components can define metadata in their definitions. If a component's type defines metadata, that metadata is inherited by the component, although the component may override it.


In [16]:
anotherthing = dr.new_component_type("anotherthing", type_metadata={"a": 2, "b": 3})

@anotherthing(metadata={"b": 4, "c": 5})
def four():
    return 4

dr.get_metadata(four)


Out[16]:
{'a': 2, 'b': 4, 'c': 5}

Component Groups

So far we haven't said how we might group components together outside of defining different component types. But sometimes we want certain components, even components of different types, to belong together and to be executed only when explicitly requested.

All of our components so far have implicitly belonged to the default group. However, component types and even individual components can be assigned to specific groups, which will run only when explicitly specified.


In [17]:
grouped = dr.new_component_type("grouped", group="grouped")

@grouped()
def five():
    return 5

b = dr.Broker()
dr.run(dr.COMPONENTS["grouped"], broker=b)
pprint(b.instances)


{<function five at 0x7f7c4eaf17d0>: 5}

If a group isn't specified in the type definition or in the component decorator, the default group is assumed. Likewise, the default group is assumed when calling run if one isn't provided.

Aliases

When a component is defined, it can be given an alias on which other components can depend instead of depending directly on the component. This is sometimes useful but is discouraged.


In [18]:
@stage(alias="six")
def six():
    return 6

@stage("six")
def times_two(s):
    return 2 * s

br = dr.Broker()
br = dr.run(broker=br)
pprint(br[times_two])


12

run_incremental

Since hundreds or even thousands of dependencies can be defined, it's sometimes useful to separate them into graphs that don't share any components and execute those graphs one at a time. In addition to the run function, the dr module also provides a run_incremental function that does exactly that. You can give it a starting broker (or none at all), and it will yield a new broker for each distinct graph among all the dependencies.
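
For example, a minimal sketch based on the description above:

# Each yielded broker holds the results of one independent dependency graph.
for b in dr.run_incremental():
    pprint(b.instances)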

Inspecting Components

The dr module provides several functions for inspecting components. You can get their aliases, dependencies, dependents, groups, type, even their entire dependency trees.


In [19]:
from insights.core import dr
print "Alias (six):", dr.get_alias(six)

# If the component's full name was foo.bar.baz.six, this would print "baz"
print "\nModule (six):", dr.get_base_module_name(six)

print "\nComponent Type (six):", dr.get_component_type(six)

print "\nDependencies (times_two): "
pprint(dr.get_dependencies(times_two))

print "\nDependency Graph (stuff): "
pprint(dr.get_dependency_graph(stuff))

print "\nDependents (rand): "
pprint(dr.get_dependents(rand))

print "\nGroup (six):", dr.get_group(six)

print "\nMetadata (four): ",
pprint(dr.get_metadata(four))

# prints the full module name of the component
print "\nModule Name (six):", dr.get_module_name(six)

# prints the module name joined to the component name by a "."
print "\nName (six):", dr.get_name(six)

print "\nSimple Name (six):", dr.get_simple_name(six)


Alias (six): six

Module (six): __main__

Component Type (six): <function stage at 0x7f7c64874230>

Dependencies (times_two): 
set([<function six at 0x7f7c4eaf15f0>])

Dependency Graph (stuff): 
{<function three at 0x7f7c4f690140>: set([]),
 <function rand at 0x7f7c4f690488>: set([]),
 <function stuff at 0x7f7c4f690b90>: set([<function three at 0x7f7c4f690140>,
                                          <function rand at 0x7f7c4f690488>])}

Dependents (rand): 
set([<function missing_more_stuff at 0x7f7c4eaf12a8>,
     <function is_greater_than_ten at 0x7f7c4eaf1410>,
     <function mul_things at 0x7f7c4f690500>,
     <function stuff at 0x7f7c4f690b90>,
     <function dothings at 0x7f7c4f690ed8>])

Group (six): 0

Metadata (four): {'a': 2, 'b': 4, 'c': 5}
 
Module Name (six): __main__

Name (six): __main__.six

Simple Name (six): six

Loading Components

If you have components defined in a package and the root of that path is in sys.path, you can load the package and all its subpackages and modules by calling dr.load_components. This way you don't have to load every component module individually.

# recursively load all packages and modules in path.to.package
dr.load_components("path.to.package")

# or load a single module
dr.load_components("path.to.package.module")

Now that you know the basics of Insights Core dependency resolution, let's move on to the rest of Core that builds on it.

Standard Component Types

The standard component types provided by Insights Core are datasource, parser, combiner, rule, condition, and incident. They're defined in insights.core.plugins.

Some have specialized interfaces and executors that adapt the dependency specification parts described above to what developers using previous versions of Insights Core have come to expect.

For more information on parser, combiner, and rule development, please see our component developer tutorials.

Datasource

A datasource used to be called a spec. Components of this type collect data and make it available to other components. Since we have several hundred predefined datasources that fall into just a handful of categories, we've streamlined the process of creating them.

Datasources are defined either with the @datasource decorator or a SpecFactory from insights.core.spec_factory.

A SpecFactory has a handful of functions for defining common datasource types.

  • simple_file
  • glob_file
  • simple_command
  • listdir
  • foreach
  • first_file
  • first_of

All datasources defined with a SpecFactory will depend on an ExecutionContext of some kind. Contexts let you activate different datasources for different environments. Most of them provide a root path for file collection and may perform some environment-specific setup for commands, even modifying the command strings if needed.

For now, we'll use a HostContext. This tells datasources to collect files starting at the root of the file system and to execute commands exactly as they are defined. Other contexts are in insights.core.contexts.

All file collection datasources depend on any context that provides a path to use as root unless a particular context is explicitly specified. In other words, some datasources will activate for multiple contexts unless told otherwise.

simple_file

simple_file reads a file from the file system and makes it available as a TextFileProvider. A TextFileProvider instance contains the path to the file and its content as a list of lines.


In [20]:
from insights.core import dr
from insights.core.context import HostContext
from insights.core.spec_factory import SpecFactory

sf = SpecFactory()
release = sf.simple_file("/etc/redhat-release", name="release")
hostname = sf.simple_file("/etc/hostname", name="hostname")

ctx = HostContext()
broker = dr.Broker()
broker[HostContext] = ctx

broker = dr.run(broker=broker)
print broker[release].path, broker[release].content
print broker[hostname].path, broker[hostname].content


/etc/redhat-release ['Fedora release 25 (Twenty Five)']
/etc/hostname ['localhost.localdomain']

glob_file

glob_file accepts glob patterns and evaluates at runtime to a list of TextFileProvider instances, one for each match. You can pass glob_file a single pattern or a list (or set) of patterns. It also accepts an ignore keyword, which should be a regular expression string matching paths to ignore. The glob and ignore patterns can be used together to match lots of files and then throw out the ones you don't want.


In [21]:
host_stuff = sf.glob_file("/etc/host*", ignore="(allow|deny)", name="host_stuff")
broker = dr.run(broker=broker)
print broker[host_stuff]


[TextFileProvider("/etc/hostname"), TextFileProvider("/etc/host.conf"), TextFileProvider("/etc/hosts")]

simple_command

simple_command allows you to get the results of a command that takes no arguments or for which you know all of the arguments up front.

It and other command datasources return a CommandOutputProvider instance, which has the command string, any arguments interpolated into it (more later), the return code if you requested it via the keep_rc=True keyword, and the command output as a list of lines.

simple_command also accepts a timeout keyword, which is the maximum number of seconds the system should attempt to execute the command before a CalledProcessError is raised for the component.

A default timeout for all commands can be set on the initial ExecutionContext instance with the timeout keyword argument.

If a timeout isn't specified in the ExecutionContext or on the command itself, none is used.


In [22]:
uptime = sf.simple_command("/usr/bin/uptime", name="uptime")
broker = dr.run(broker=broker)
print (broker[uptime].cmd, broker[uptime].args, broker[uptime].rc, broker[uptime].content)


('/usr/bin/uptime', None, None, [' 10:08:01 up 41 days, 18:13,  1 user,  load average: 1.09, 1.02, 1.70'])
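
Here's a minimal sketch of the timeout options described above; the names are purely illustrative:

# A per-command limit of five seconds.
quick_uptime = sf.simple_command("/usr/bin/uptime", name="quick_uptime", timeout=5)

# Or a default limit for every command executed under this context.
ctx_with_timeout = HostContext(timeout=30)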

listdir

listdir lets you get the contents of a directory.


In [23]:
interfaces = sf.listdir("/sys/class/net", name="interfaces")
broker = dr.run(broker=broker)
pprint(broker[interfaces])


['lo', 'virbr0', 'wlp3s0', 'enp0s31f6', 'tun0', 'vboxnet0', 'virbr0-nic']

foreach

foreach allows you to use output from one component as input to a datasource command string. For example, using the output of the interfaces datasource above, we can get ethtool information about all of the ethernet devices.

The timeout description provided in the simple_command section applies here to each separate invocation.


In [24]:
ethtool = sf.foreach(interfaces, "ethtool %s", name="ethtool")
broker = dr.run(broker=broker)
pprint(broker[ethtool])


[CommandOutputProvider("ethtool lo"),
 CommandOutputProvider("ethtool virbr0"),
 CommandOutputProvider("ethtool wlp3s0"),
 CommandOutputProvider("ethtool enp0s31f6"),
 CommandOutputProvider("ethtool tun0"),
 CommandOutputProvider("ethtool vboxnet0"),
 CommandOutputProvider("ethtool virbr0-nic")]

Notice each element in the list returned by interfaces is a single string. The system interpolates each element into the ethtool command string and evaluates each result. This produces a list of objects, one for each input element, instead of a single object. If the list created by interfaces contained tuples with n elements, then our command string would have had n substitution parameters.
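
For instance, here's a hypothetical sketch of the tuple case; the iface_attrs datasource and the values it yields are purely illustrative:

from insights.core.plugins import datasource

@datasource(HostContext)
def iface_attrs(ctx):
    # Each (interface, attribute) tuple fills both substitution parameters.
    return [("lo", "mtu"), ("lo", "flags")]

iface_attr_info = sf.foreach(iface_attrs, "/bin/cat /sys/class/net/%s/%s", name="iface_attr_info")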

first_file

first_file takes a list of file names and returns a TextFileProvider for the first one it finds. This is useful if you're looking for a single file that might be in different locations.
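
A minimal sketch; the candidate paths are just illustrative:

time_conf = sf.first_file(["/etc/chrony.conf", "/etc/ntp.conf"], name="time_conf")
broker = dr.run(broker=broker)
print broker[time_conf].path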

first_of

first_of is a way to express that you want to use any datasource from a list of datasources you've already defined. This is helpful if the way you collect data differs in different contexts, but the output is the same.

For example, the way you collect installed rpms directly from a machine differs from how you would collect them from say, a docker image. Ultimately, downstream components may not care how the data is acquired. They just want rpm data.

You could do the following. Notice that host_rpms and docker_installed_rpms implement different ways of getting rpm data that depend on different contexts, but the final installed_rpms datasource just references whichever one ran, and other components needing rpm info don't have to care.


In [25]:
from insights.config import format_rpm
from insights.core.context import DockerImageContext
from insights.core.plugins import datasource
from insights.core.spec_factory import CommandOutputProvider

rpm_format = format_rpm()
cmd = "/usr/bin/rpm -qa --qf '%s'" % rpm_format

host_rpms = sf.simple_command(cmd, name="host_rpms", context=HostContext)

@datasource(DockerImageContext)
def docker_installed_rpms(ctx):
    root = ctx.root
    cmd = "/usr/bin/rpm -qa --root %s --qf '%s'" % (root, rpm_format)
    result = ctx.shell_out(cmd)
    return CommandOutputProvider(cmd, ctx, content=result)

installed_rpms = sf.first_of([host_rpms, docker_installed_rpms])

broker = dr.run(broker=broker)
pprint(broker[installed_rpms])


CommandOutputProvider("/usr/bin/rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}	%{INSTALLTIME:date}	%{BUILDTIME}	%{RSAHEADER:pgpsig}	%{DSAHEADER:pgpsig}
'")

What datasources does Insights Core provide?

To see a list of datasources we already collect, have a look in insights.specs.

Overriding Standard Datasources

You can override any of the datasources that we have defined in insights.specs. Normally, you wouldn't want to do this because we also provide many downstream components that depend on them. However, we realize that sometimes you want to take advantage of those components but provide the data they expect in a way we didn't anticipate.

Overriding a standard datasource in insights.specs is easy. Create a SpecFactory with the string "insights.specs" as an argument. Any datasource you create through the factory will override the datasource with the same name keyword in insights.specs.
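
Here's a minimal sketch, assuming a standard datasource named hostname exists in insights.specs; the overriding file path is just an example:

from insights.core.spec_factory import SpecFactory

override_sf = SpecFactory("insights.specs")
hostname = override_sf.simple_file("/etc/my_custom_hostname", name="hostname")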

Parsers

Parsers are the next major component type Insights Core provides. A Parser depends on a single datasource and is responsible for converting its raw content into a structured object.

Let's build a simple parser.


In [26]:
from insights.core import Parser
from insights.core.plugins import parser

@parser(hostname)
class HostnameParser(Parser):
    def parse_content(self, content):
        self.host, _, self.domain = content[0].partition(".")

broker = dr.run(broker=broker)
print "Host:", broker[HostnameParser].host


Host: localhost

Notice that the parser decorator accepts only one argument, the datasource the component needs. Also notice that our parser has a sensible default constructor that accepts a datasource and passes its content into a parse_content function.

Our hostname parser is pretty simple, but it's easy to see how parsing things like rpm data or configuration files could get complicated.

Speaking of rpms, hopefully it's also easy to see that an rpm parser could depend on our installed_rpms definition in the previous section and parse the content regardless of where the content originated.

What about parser dependencies that produce lists of components?

Not only do parsers have a special decorator, they also have a special executor. If the datasource is a list, the executor will attempt to construct a parser object with each element of the list, and the value of the parser in the broker will be the list of parser objects. It's important to keep this in mind when developing components that depend on parsers.

This is also why exceptions raised by components are stored as lists by component instead of single values.

Here's a simple parser that depends on the ethtool datasource.


In [27]:
@parser(ethtool)
class Ethtool(Parser):
    def parse_content(self, content):
        self.link_detected = None
        self.device = None
        for line in content:
            if "Settings for" in line:
                self.device = line.split(" ")[-1].strip(":")
            if "Link detected" in line:
                self.link_detected = line.split(":")[-1].strip()
                
broker = dr.run(broker=broker)
for eth in broker[Ethtool]:
    print "Device:", eth.device
    print "Link? :", eth.link_detected, "\n"


Device: lo
Link? : yes 

Device: virbr0
Link? : no 

Device: wlp3s0
Link? : yes 

Device: enp0s31f6
Link? : no 

Device: tun0
Link? : yes 

Device: vboxnet0
Link? : yes 

Device: virbr0-nic
Link? : no 

We provide curated parsers for all of our datasources. They're in insights.parsers.

Combiners

Combiners depend on two or more other components. They typically are used to standardize interfaces or to provide a higher-level view of some set of components.

As an example of standardizing interfaces, the chkconfig and service commands can be used to retrieve similar data about service status, but the command you run to check that status depends on your operating system version. A datasource would be defined for each command along with a parser to interpret its output. However, a downstream component may just care about a service's status, not about how a particular program exposes it. A combiner can depend on both the chkconfig and service parsers, requiring only one of them to be present (e.g., @combiner([[chkconfig, service]])), and provide a unified interface to the data.

As an example of a higher level view of several related components, imagine a combiner that depends on various ethtool and other network information gathering parsers. It can compile all of that information behind one view, exposing a range of information about devices, interfaces, iptables, etc. that might otherwise be scattered across a system.

We provide a few common combiners. They're in insights.combiners.

Here's an example combiner that tries a few different ways to determine the Red Hat release information. Notice that its dependency declarations and interface are just like we've discussed before. If this were a class, its __init__ function would be declared as def __init__(self, rh_release, un).

from collections import namedtuple
from insights.core.plugins import combiner
from insights.parsers.redhat_release import RedhatRelease as rht_release
from insights.parsers.uname import Uname

# The value this combiner produces.
Release = namedtuple("Release", ["major", "minor"])

@combiner([rht_release, Uname])
def redhat_release(rh_release, un):
    if un and un.release_tuple[0] != -1:
        return Release(*un.release_tuple)

    if rh_release:
        return Release(rh_release.major, rh_release.minor)

    raise Exception("Unable to determine release.")

Rules

Rules depend on parsers and/or combiners and encapsulate particular policies about their state. For example, a rule might detect whether a defective rpm is installed. It might also inspect the lsof parser to determine if a process is using a file from that defective rpm. It could also check network information to see if the process is a server and whether it's bound to an internal or external IP address. Rules can check for anything you can surface in a parser or a combiner.

Rules use the make_response function to create their return values. It takes one required parameter, a key identifying the particular state the rule wants to highlight, and any number of keyword arguments that provide context for that state.


In [28]:
from insights.core.plugins import rule, make_response

ERROR_KEY = "HOST_IS_LOCALHOST"

@rule(HostnameParser)
def report(hn):
    if "localhost" in hn.host:
        return make_response(ERROR_KEY)

    
brok = dr.Broker()
brok[HostContext] = HostContext()

brok = dr.run(broker=brok)
pprint(brok.get(report))


{'error_key': 'HOST_IS_LOCALHOST', 'type': 'rule'}

Conditions and Incidents

Conditions and incidents are optional components that can be used by rules to encapsulate particular pieces of logic.

Conditions are questions with answers that can be interpreted as True or False. For example, a condition might be "Does the kdump configuration contain a 'net' target type?" or "Is the operating system Red Hat Enterprise Linux 7?"

Incidents, on the other hand, typically are specific types of warning or error messages from log type files.

Why would you use conditions or incidents instead of just writing the logic directly into the rule? Future versions of Insights may allow automated analysis of rules and their conditions and incidents. You will be able to tell which conditions, incidents, and rule firings across all rules correspond with each other and how strongly. This feature will become more powerful as conditions and incidents are written independently of explicit rules.
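
For illustration, here's a minimal sketch of a condition, assuming conditions are declared the same way as rules:

from insights.core.plugins import condition

@condition(HostnameParser)
def is_localhost(hn):
    # A reusable True/False question that rules can depend on.
    return hn.host == "localhost"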

Serialization

Say we write a pile of components and then run them all on a system. Then what? It would be nice if we could get dictionary representations of our components that we could serialize and ship somewhere like a REST endpoint or a storage location.

We've written a simple serialization framework to help with this process.

Let's start with our HostnameParser above.


In [29]:
from insights.core.serde import serializer, deserializer, marshal, unmarshal

@serializer(HostnameParser)
def ser_hostname(hn):
    d = {}
    d["host"] = hn.host
    d["domain"] = hn.domain
    return d

@deserializer(HostnameParser)
def des_hostname(_type, data):
    hn = _type.__new__(_type)
    hn.host = data["host"]
    hn.domain = data["domain"]
    return hn

In [30]:
data = marshal(broker[HostnameParser])
pprint(data)


{'object': {'domain': 'localdomain', 'host': 'localhost'},
 'type': '__main__.HostnameParser'}

In [31]:
hn = unmarshal(data)
print "Host:", hn.host
print "Domain:", hn.domain


Host: localhost
Domain: localdomain

Serialization works by looking for a function registered to handle the class of the given object. If one isn't found, it looks for a serializer registered for the next class in the object's MRO. This continues all the way up to object. If a serializer still isn't found, we punt and just return the object as it is. Deserialization works the same way.

This means if we have components that share a base class, we can sometimes write serde functions for the base class and not have to write separate ones for all the subclasses.

For example, serialization and deserialization functions exist for the Parser super class that work on the output of vars. That means that any Parser subclass with attributes of simple types can be automatically serialized and deserialized.

We already provide serde functions for all rules and all datasources created with a SpecFactory.


In [32]:
rep = marshal(brok[report])
pprint(rep)

eth = marshal(broker[Ethtool])
pprint(eth)


{'object': {'error_key': 'HOST_IS_LOCALHOST', 'type': 'rule'}, 'type': None}
[{'object': {'args': 'lo',
             'device': 'lo',
             'file_name': 'ethtool_lo',
             'file_path': 'insights_commands/ethtool_lo',
             'last_client_run': None,
             'link_detected': 'yes'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'virbr0',
             'device': 'virbr0',
             'file_name': 'ethtool_virbr0',
             'file_path': 'insights_commands/ethtool_virbr0',
             'last_client_run': None,
             'link_detected': 'no'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'wlp3s0',
             'device': 'wlp3s0',
             'file_name': 'ethtool_wlp3s0',
             'file_path': 'insights_commands/ethtool_wlp3s0',
             'last_client_run': None,
             'link_detected': 'yes'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'enp0s31f6',
             'device': 'enp0s31f6',
             'file_name': 'ethtool_enp0s31f6',
             'file_path': 'insights_commands/ethtool_enp0s31f6',
             'last_client_run': None,
             'link_detected': 'no'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'tun0',
             'device': 'tun0',
             'file_name': 'ethtool_tun0',
             'file_path': 'insights_commands/ethtool_tun0',
             'last_client_run': None,
             'link_detected': 'yes'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'vboxnet0',
             'device': 'vboxnet0',
             'file_name': 'ethtool_vboxnet0',
             'file_path': 'insights_commands/ethtool_vboxnet0',
             'last_client_run': None,
             'link_detected': 'yes'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'virbr0-nic',
             'device': 'virbr0-nic',
             'file_name': 'ethtool_virbr0-nic',
             'file_path': 'insights_commands/ethtool_virbr0-nic',
             'last_client_run': None,
             'link_detected': 'no'},
  'type': '__main__.Ethtool'}]

In [33]:
et = unmarshal(data)
pprint(et)


<__main__.HostnameParser object at 0x7f7c4eb17f50>

Observers

Insights Core allows you to attach functions to component types, and they'll be called any time a component of that type is encountered. You can attach observer functions globally or to a particular broker.

Observers are called whether a component succeeds or not. They take the component and the broker right after the component is evaluated and so are able to ask the broker about values, exceptions, missing requirements, etc.


In [34]:
def observer(c, broker):
    if c not in broker:
        return
    
    value = broker[c]
    pprint(value)
    
broker.add_observer(observer, component_type=parser)
broker = dr.run(broker=broker)


<__main__.HostnameParser object at 0x7f7c4eb17850>
[<__main__.Ethtool object at 0x7f7c4ea92090>,
 <__main__.Ethtool object at 0x7f7c4ea920d0>,
 <__main__.Ethtool object at 0x7f7c4ea92110>,
 <__main__.Ethtool object at 0x7f7c4ea92150>,
 <__main__.Ethtool object at 0x7f7c4ea92190>,
 <__main__.Ethtool object at 0x7f7c4ea921d0>,
 <__main__.Ethtool object at 0x7f7c4ea92210>]

Combining Serialization and Observers

We can combine concepts from the last two sections to serialize components based on their type. The step from here to writing the data to yaml (or anything else) is trivial.


In [35]:
def marshaller(c, broker):
    if c not in broker:
        return
    
    value = broker[c]
    data = marshal(value)
    pprint(data)
    
broker.add_observer(marshaller, component_type=parser)
broker = dr.run(broker=broker)

# Notice that the previous observer hooked onto the broker from the Observers section
# also fires here. You can add as many observers as you want.


<__main__.HostnameParser object at 0x7f7c4eb17850>
{'object': {'domain': 'localdomain', 'host': 'localhost'},
 'type': '__main__.HostnameParser'}
[<__main__.Ethtool object at 0x7f7c4ea92090>,
 <__main__.Ethtool object at 0x7f7c4ea920d0>,
 <__main__.Ethtool object at 0x7f7c4ea92110>,
 <__main__.Ethtool object at 0x7f7c4ea92150>,
 <__main__.Ethtool object at 0x7f7c4ea92190>,
 <__main__.Ethtool object at 0x7f7c4ea921d0>,
 <__main__.Ethtool object at 0x7f7c4ea92210>]
[{'object': {'args': 'lo',
             'device': 'lo',
             'file_name': 'ethtool_lo',
             'file_path': 'insights_commands/ethtool_lo',
             'last_client_run': None,
             'link_detected': 'yes'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'virbr0',
             'device': 'virbr0',
             'file_name': 'ethtool_virbr0',
             'file_path': 'insights_commands/ethtool_virbr0',
             'last_client_run': None,
             'link_detected': 'no'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'wlp3s0',
             'device': 'wlp3s0',
             'file_name': 'ethtool_wlp3s0',
             'file_path': 'insights_commands/ethtool_wlp3s0',
             'last_client_run': None,
             'link_detected': 'yes'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'enp0s31f6',
             'device': 'enp0s31f6',
             'file_name': 'ethtool_enp0s31f6',
             'file_path': 'insights_commands/ethtool_enp0s31f6',
             'last_client_run': None,
             'link_detected': 'no'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'tun0',
             'device': 'tun0',
             'file_name': 'ethtool_tun0',
             'file_path': 'insights_commands/ethtool_tun0',
             'last_client_run': None,
             'link_detected': 'yes'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'vboxnet0',
             'device': 'vboxnet0',
             'file_name': 'ethtool_vboxnet0',
             'file_path': 'insights_commands/ethtool_vboxnet0',
             'last_client_run': None,
             'link_detected': 'yes'},
  'type': '__main__.Ethtool'},
 {'object': {'args': 'virbr0-nic',
             'device': 'virbr0-nic',
             'file_name': 'ethtool_virbr0-nic',
             'file_path': 'insights_commands/ethtool_virbr0-nic',
             'last_client_run': None,
             'link_detected': 'no'},
  'type': '__main__.Ethtool'}]